7. Basic fit setup and numerics#
7.1. Init and pulling data#
Here the setup is mainly handled by some basic scripts; these follow the outline in the PEMtk documentation [11], see in particular the intro to fitting.
# Run default config - may need to set full path here
%run '../scripts/setup_notebook.py'
# Override plotters backend?
# plotBackend = 'pl'
*** Setting up notebook with standard Quantum Metrology Vol. 3 imports...
For more details see https://pemtk.readthedocs.io/en/latest/fitting/PEMtk_fitting_basic_demo_030621-full.html
To use local source code, pass the parent path to this script at run time, e.g. "setup_fit_demo ~/github"
*** Running: 2023-04-28 09:37:48
Working dir: /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2
Build env: html
* Loading packages...
* sparse not found, sparse matrix forms not available.
* natsort not found, some sorting functions not available.
* Setting plotter defaults with epsproc.basicPlotters.setPlotters(). Run directly to modify, or change options in local env.
* Set Holoviews with bokeh.
* pyevtk not found, VTK export not available.
* Set Holoviews with bokeh.
Jupyter Book : 0.15.1
External ToC : 0.3.1
MyST-Parser : 0.18.1
MyST-NB : 0.17.2
Sphinx Book Theme : 1.0.1
Jupyter-Cache : 0.6.1
NbClient : 0.5.4
OMP: Info #271: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
import wget
from pathlib import Path
dataName = 'n2fitting'
# Pull N2 data from ePSproc Github repo
# URLs for test ePSproc datasets - n2
# For more datasets use ePSdata, see https://epsproc.readthedocs.io/en/dev/demos/ePSdata_download_demo_300720.html
urls = {'n2PU':"https://github.com/phockett/ePSproc/blob/master/data/photoionization/n2_multiorb/n2_1pu_0.1-50.1eV_A2.inp.out",
'n2SU':"https://github.com/phockett/ePSproc/blob/master/data/photoionization/n2_multiorb/n2_3sg_0.1-50.1eV_A2.inp.out",
'n2ADMs':"https://github.com/phockett/ePSproc/blob/master/data/alignment/N2_ADM_VM_290816.mat",
'demoScript':"https://github.com/phockett/PEMtk/blob/master/demos/fitting/setup_fit_demo.py"}
# Set data dir
dataPath = Path(Path.cwd(), dataName)
# Create and pull files if dir not present (NOTE doesn't check per file here)
if not dataPath.is_dir():
    dataPath.mkdir()
    # Pull files with wget
    for k,v in urls.items():
        wget.download(v+'?raw=true', out=dataPath.as_posix())  # For Github add '?raw=true' to URL
# List files
list(dataPath.glob('*.out')) + list(dataPath.glob('*.mat'))
[PosixPath('/home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/n2_1pu_0.1-50.1eV_A2.inp.out'),
PosixPath('/home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/n2_3sg_0.1-50.1eV_A2.inp.out'),
PosixPath('/home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/N2_ADM_VM_290816.mat')]
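As noted in the cell above, the download step only checks for the directory, not for individual files. A minimal sketch of a per-file check is shown below; the `missingFiles` helper is hypothetical (not part of PEMtk or ePSproc), and assumes each URL's final path component is the local filename, which holds for the Github links used here.

```python
from pathlib import Path

def missingFiles(dataPath, urls):
    """Return the subset of urls whose target file is not yet on disk.

    Hypothetical helper: assumes each URL's final path component
    is the local filename (true for the Github links above).
    """
    dataPath = Path(dataPath)
    return {k: v for k, v in urls.items()
            if not (dataPath / Path(v).name).is_file()}

# Usage sketch: only pull what's missing (wget.download as in the cell above)
# for k, v in missingFiles(dataPath, urls).items():
#     wget.download(v + '?raw=true', out=dataPath.as_posix())
```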
7.2. Setup with options#
Following the PEMtk documentation [11], the fitting workspace can be configured by setting:
- A fitting basis set, either from computational matrix elements, from symmetry constraints, or manually. (See Sect. 6 for more discussion.)
- Data to fit. In the examples herein, synthetic data will be created by adding noise to computational results.
- ADMs to use for the fit. Again, these may be from computational results, or set manually. If not specified, these will default to an isotropic distribution, which may be appropriate in some cases.
# Init class object
data = pemtkFit(fileBase = dataPath, verbose = 1)
# Read data files
data.scanFiles()
# data.jobsSummary()
*** Job subset details
Key: subset
No 'job' info set for self.data[subset].
*** Job orb6 details
Key: orb6
Dir /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting, 1 file(s).
{ 'batch': 'ePS n2, batch n2_1pu_0.1-50.1eV, orbital A2',
'event': ' N2 A-state (1piu-1)',
'orbE': -17.09691397835426,
'orbLabel': '1piu-1'}
*** Job orb5 details
Key: orb5
Dir /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting, 1 file(s).
{ 'batch': 'ePS n2, batch n2_3sg_0.1-50.1eV, orbital A2',
'event': ' N2 X-state (3sg-1)',
'orbE': -17.34181645456815,
'orbLabel': '3sg-1'}
7.2.1. Alignment distribution moments (ADMs)#
The class wraps ep.setADMs(). This returns an isotropic distribution by default, or values can be set explicitly from a list. Values are set in self.data['ADM'].
Note: if this is not set, the default value will be used, which is likely not very useful for the fit!
# Default case
data.setADMs()
# data.ADM['ADMX']
data.data['ADM']['ADM']
<xarray.DataArray 'ADM' (ADM: 1, t: 1)>
array([[1]])
Coordinates:
* ADM (ADM) MultiIndex
- K (ADM) int64 0
- Q (ADM) int64 0
- S (ADM) int64 0
* t (t) int64 0
Attributes:
dataType: ADM
long_name: Axis distribution moments
units: arb
# Load time-dependent ADMs for N2 case
# Adapted from ePSproc_AFBLM_testing_010519_300719.m
from scipy.io import loadmat
ADMdataFile = os.path.join(dataPath, 'N2_ADM_VM_290816.mat')
ADMs = loadmat(ADMdataFile)
# Set tOffset for calcs: -3.76 ps
# This is a 2-pulse case; t=0 is set to the 2nd pulse (matching the defn. in the N2 experimental paper)
tOffset = -3.76
ADMs['time'] = ADMs['time'] + tOffset
data.setADMs(ADMs = ADMs['ADM'], t=ADMs['time'].squeeze(), KQSLabels = ADMs['ADMlist'], addS = True)
data.data['ADM']['ADM']
<xarray.DataArray 'ADM' (ADM: 4, t: 3691)>
array([[ 1.00000000e+00+0.00000000e+00j, 1.00000000e+00+0.00000000e+00j,
1.00000000e+00+0.00000000e+00j, ...,
1.00000000e+00+0.00000000e+00j, 1.00000000e+00+0.00000000e+00j,
1.00000000e+00+0.00000000e+00j],
[-2.26243113e-17+0.00000000e+00j, 2.43430608e-08+1.04125246e-20j,
9.80188266e-08+6.89166168e-20j, ...,
1.05433798e-01-1.62495135e-18j, 1.05433798e-01-1.62495135e-18j,
1.05433798e-01-1.62495135e-18j],
[ 1.55724057e-16+0.00000000e+00j, -3.37021111e-10-6.81416260e-20j,
1.95424253e-10-3.10513374e-19j, ...,
8.39913132e-02-5.12795441e-17j, 8.39913132e-02-5.12795441e-17j,
8.39913132e-02-5.12795441e-17j],
[-7.68430227e-16+0.00000000e+00j, -1.40177466e-11+1.04987400e-19j,
6.33419102e-10+1.74747003e-18j, ...,
3.78131657e-02+4.01318983e-16j, 3.78131657e-02+4.01318983e-16j,
3.78131657e-02+4.01318983e-16j]])
Coordinates:
* ADM (ADM) MultiIndex
- K (ADM) int64 0 2 4 6
- Q (ADM) int64 0 0 0 0
- S (ADM) int64 0 0 0 0
* t (t) float64 -3.76 -3.76 -3.76 -3.759 -3.759 ... 10.1 10.1 10.1 10.1
Attributes:
dataType: ADM
long_name: Axis distribution moments
units: arb
# Manual plot with hvplot
key = 'ADM'
dataType='ADM'
data.data[key][dataType].unstack().real.hvplot.line(x='t').overlay(['K','Q','S'])
# Wrapper OK with Matplotlib, but needs work for hv case (see below)
%matplotlib inline
data.ADMplot(keys = 'ADM')
Dataset: ADM, ADM
7.2.2. Polarisation geometry/ies#
This wraps ep.setPolGeoms. This defaults to (x,y,z) polarization geometries. Values are set in self.data['pol'].
Note: if this is not set, the default value will be used, which is likely not very useful for the fit!
data.setPolGeoms()
data.data['pol']['pol']
<xarray.DataArray (Labels: 3)>
array([quaternion(1, -0, 0, 0),
quaternion(0.707106781186548, -0, 0.707106781186547, 0),
quaternion(0.5, -0.5, 0.5, 0.5)], dtype=quaternion)
Coordinates:
Euler (Labels) object (0.0, 0.0, 0.0) ... (1.5707963267948966, 1.57079...
* Labels (Labels) <U32 'z' 'x' 'y'
Attributes:
dataType: Euler
7.2.3. Subselect data#
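The default geometries above are just the (x, y, z) axes expressed as rotations. As an illustration (a sketch of the underlying maths, not the `ep.setPolGeoms` internals), the quaternions shown can be reproduced from zyz Euler angles with plain Python; note that quaternion sign conventions can differ between libraries.

```python
import math

def qz(a):
    """Rotation by angle a about z, as a (w, x, y, z) quaternion."""
    return (math.cos(a/2), 0.0, 0.0, math.sin(a/2))

def qy(b):
    """Rotation by angle b about y."""
    return (math.cos(b/2), 0.0, math.sin(b/2), 0.0)

def qmul(q1, q2):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def eulerZYZ(alpha, beta, gamma):
    """Quaternion for zyz Euler angles (alpha, beta, gamma)."""
    return qmul(qmul(qz(alpha), qy(beta)), qz(gamma))

halfPi = math.pi / 2
print(eulerZYZ(0, 0, 0))             # z: (1, 0, 0, 0)
print(eulerZYZ(0, halfPi, 0))        # x: (~0.707, 0, ~0.707, 0)
print(eulerZYZ(halfPi, halfPi, 0))   # y: (0.5, -0.5, 0.5, 0.5)
```

The three results match the quaternion values printed in the output above (up to signed zeros).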
Subselection is currently handled in the class by setting self.selOpts; this allows for simple reuse of settings as required. Subselected data is set to self.data['subset'][dataType], and this is the data the fitting routine will use.
# Settings for type subselection are in selOpts[dataType]
# E.g. Matrix element sub-selection
data.selOpts['matE'] = {'thres': 0.01, 'inds': {'Type':'L', 'Eke':1.1}}
data.setSubset(dataKey = 'orb5', dataType = 'matE') # Subselect from 'orb5' dataset, matrix elements
# Show subselected data
# data.data['subset']['matE']
# Tabulate the matrix elements
# Not showing as nice table for singleton case - pd.series vs. dataframe?
# data.matEtoPD(keys = 'subset', xDim = 'Sym', drop=False)
# And for the polarisation geometries...
data.selOpts['pol'] = {'inds': {'Labels': 'z'}}
data.setSubset(dataKey = 'pol', dataType = 'pol')
# And for the ADMs...
data.selOpts['ADM'] = {} #{'thres': 0.01, 'inds': {'Type':'L', 'Eke':1.1}}
data.setSubset(dataKey = 'ADM', dataType = 'ADM', sliceParams = {'t':[4, 5, 4]})
Subselected from dataset 'orb5', dataType 'matE': 36 from 11016 points (0.33%)
Subselected from dataset 'pol', dataType 'pol': 1 from 3 points (33.33%)
Subselected from dataset 'ADM', dataType 'ADM': 52 from 14764 points (0.35%)
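The thresholding step reported above can be illustrated with a toy NumPy sketch (this mimics the `'thres': 0.01` option on magnitudes; it is not the actual `setSubset` internals):

```python
import numpy as np

# Toy "matrix elements": keep only points with magnitude above threshold
vals = np.array([2.69 + 0.5j, 1.1e-3 + 2e-3j, -0.80 + 0.1j, 5e-5 + 0j])
thres = 0.01
mask = np.abs(vals) > thres
subset = vals[mask]
print(f"Subselected {subset.size} from {vals.size} points "
      f"({100 * subset.size / vals.size:.2f}%)")
# Subselected 2 from 4 points (50.00%)
```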
# Plot from Xarray vs. full dataset
# data.data['subset']['ADM'].where(ADMX['K']>0).real.squeeze().plot.line(x='t');
data.data['subset']['ADM'].real.squeeze().plot.line(x='t', marker = 'x', linestyle='dashed');
data.data['ADM']['ADM'].real.squeeze().plot.line(x='t');
7.3. Compute AF-\(\beta_{LM}\) and simulate data#
With all the components set, some observables can be calculated. For testing, we'll also use this to simulate an experimental trace…
Here we’ll use self.afblmMatEfit(), which is also the main fitting routine, and essentially wraps epsproc.afblmXprod() to compute AF-\(\beta_{LM}\)s (for more details, see the ePSproc method development docs).
If called without reference data, the method returns the computed AF-\(\beta_{LM}\)s based on the input subsets already created, along with the set of (product) basis functions generated. These can be examined to get a feel for the sensitivity of the geometric part of the problem, and will also be reused during fitting to avoid repetitive computation.
7.3.1. Compute AF-\(\beta_{LM}\)s#
# data.afblmMatEfit(data = None) # OK
BetaNormX, basis = data.afblmMatEfit() # OK, uses default polarizations & ADMs as set in data['subset']
# BetaNormX, basis = data.afblmMatEfit(ADM = data.data['subset']['ADM']) # OK, but currently using default polarizations
# BetaNormX, basis = data.afblmMatEfit(ADM = data.data['subset']['ADM'], pol = data.data['pol']['pol'].sel(Labels=['x']))
# BetaNormX, basis = data.afblmMatEfit(ADM = data.data['subset']['ADM'], pol = data.data['pol']['pol'].sel(Labels=['x','y'])) # This fails for a single label...?
# BetaNormX, basis = data.afblmMatEfit(RX=data.data['pol']['pol']) # This currently fails, need to check for consistency in ep.sphCalc.WDcalc()
# - looks like set values and inputs are not consistent in this case? Not passing angs correctly, or overriding?
# - See also recently-added sfError flag, which may cause additional problems.
7.3.2. AF-\(\beta_{LM}\)s#
The returned objects contain the \(\beta_{LM}\) parameters as an Xarray…
# Line-plot with Xarray/Matplotlib
# Note there is no filtering here, so this includes some invalid and null terms
BetaNormX.sel(Labels='A').real.squeeze().plot.line(x='t');
… and the basis sets as a dictionary.
basis.keys()
dict_keys(['BLMtableResort', 'polProd', 'phaseConvention', 'BLMRenorm'])
7.4. Fitting the data#
In order to fit data, and extract matrix elements from an experimental case, we'll use the lmfit library. This wraps core Scipy fitting routines with additional objects and methods, and is further wrapped for this specific class of problems in the pemtkFit class used here.
7.4.1. Set the data to fit#
Here we’ll use the values calculated above as our test data. This currently needs to be set as self.data['subset']['AFBLM'] for fitting.
# data.data['subset']['AFBLM'] = BetaNormX # Set manually
data.setData('sim', BetaNormX) # Set simulated data to master structure as "sim"
data.setSubset('sim','AFBLM') # Set to 'subset' to use for fitting.
Subselected from dataset 'sim', dataType 'AFBLM': 156 from 156 points (100.00%)
# Set basis functions
data.basis = basis
7.4.2. Adding noise#
# Add noise with np.random.normal
# https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html
# data.data['subset']['AFBLM']
import numpy as np
mu, sigma = 0, 0.05 # Gaussian noise, up to approx 10% of signal
# Create noise with the same (t, l) dimensions as the dataset
noise = np.random.normal(mu, sigma, [data.data['subset']['AFBLM'].t.size, data.data['subset']['AFBLM'].l.size])
# data.BLMfitPlot()
# Set noise in Xarray & scale by l
import xarray as xr
noiseXR = xr.ones_like(data.data['subset']['AFBLM']) * noise
# data.data['subset']['AFBLM']['noise'] = ((data.data['subset']['AFBLM'].t, data.data['subset']['AFBLM'].l), noise)
# xr.where(noiseXR.l>0, noiseXR/noiseXR.l, noiseXR)
noiseXR = noiseXR.where(noiseXR.l<1, noiseXR/(noiseXR.l)) # Scale by L
data.data['subset']['AFBLM'] = data.data['subset']['AFBLM'] + noiseXR
data.data['subset']['AFBLM'] = data.data['subset']['AFBLM'].where(data.data['subset']['AFBLM'].m == 0, 0)
data.BLMfitPlot()
Dataset: subset, AFBLM
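The l-scaling applied above can be sketched with plain NumPy (toy array sizes, not the real dataset); the `np.where` call reproduces the semantics of the xarray `.where(noiseXR.l<1, noiseXR/noiseXR.l)` step, i.e. noise is divided by l for l >= 1:

```python
import numpy as np

rng = np.random.default_rng(0)
nT, nL = 50, 4                         # toy (t, l) grid
noise = rng.normal(0, 0.05, (nT, nL))

# Scale noise down by l for l >= 1, as in the cell above
lVals = np.arange(nL)                  # l = 0, 1, 2, 3
scaled = np.where(lVals < 1, noise, noise / np.maximum(lVals, 1))

# Higher-order terms now carry proportionally less noise
assert scaled[:, 2].std() < noise[:, 2].std()
```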
7.4.3. Setting up the fit parameters#
In this case, we can work from the existing matrix elements to speed up parameter creation, although in practice this may need to be approached ab initio. Nonetheless, the method is the same, and the ab initio case is detailed later.
# Input set, as defined earlier
# data.data['subset']['matE'].pd
# Set matrix elements from ab initio results
data.setMatEFit(data.data['subset']['matE'])
Set 6 complex matrix elements to 12 fitting params, see self.params for details.
Auto-setting parameters.
| name | value | initial value | min | max | vary | expression |
|---|---|---|---|---|---|---|
| m_PU_SG_PU_1_n1_1_1 | 1.78461575 | 1.7846157536101068 | 1.0000e-04 | 5.00000000 | True | |
| m_PU_SG_PU_1_1_n1_1 | 1.78461575 | 1.7846157536101068 | 1.0000e-04 | 5.00000000 | False | m_PU_SG_PU_1_n1_1_1 |
| m_PU_SG_PU_3_n1_1_1 | 0.80290495 | 0.802904951323892 | 1.0000e-04 | 5.00000000 | True | |
| m_PU_SG_PU_3_1_n1_1 | 0.80290495 | 0.802904951323892 | 1.0000e-04 | 5.00000000 | False | m_PU_SG_PU_3_n1_1_1 |
| m_SU_SG_SU_1_0_0_1 | 2.68606212 | 2.686062120382649 | 1.0000e-04 | 5.00000000 | True | |
| m_SU_SG_SU_3_0_0_1 | 1.10915311 | 1.109153108617096 | 1.0000e-04 | 5.00000000 | True | |
| p_PU_SG_PU_1_n1_1_1 | -0.86104140 | -0.8610414024232179 | -3.14159265 | 3.14159265 | False | |
| p_PU_SG_PU_1_1_n1_1 | -0.86104140 | -0.8610414024232179 | -3.14159265 | 3.14159265 | False | p_PU_SG_PU_1_n1_1_1 |
| p_PU_SG_PU_3_n1_1_1 | -3.12044446 | -3.1204444620772467 | -3.14159265 | 3.14159265 | True | |
| p_PU_SG_PU_3_1_n1_1 | -3.12044446 | -3.1204444620772467 | -3.14159265 | 3.14159265 | False | p_PU_SG_PU_3_n1_1_1 |
| p_SU_SG_SU_1_0_0_1 | 2.61122920 | 2.611229196458127 | -3.14159265 | 3.14159265 | True | |
| p_SU_SG_SU_3_0_0_1 | -0.07867828 | -0.07867827542158025 | -3.14159265 | 3.14159265 | True |
This sets self.params from the matrix elements, which are a set of (real) parameters for lmfit, as a Parameters object.
Note that:
- The input matrix elements are converted to magnitude-phase form, hence there are twice as many parameters as input complex values, labelled m or p accordingly, along with a name based on the full set of QNs/indexes set.
- One phase is set to vary=False, which defines a reference phase. This defaults to the first phase item.
- Min and max values are defined; by default the ranges are 1e-4 < mag < 5, -pi < phase < pi.
- Relationships between the parameters are set by default, but can be set manually (see section below), or pass paramsCons=None to skip.
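The magnitude-phase parametrisation can be illustrated with NumPy (values loosely based on the table above; this is a sketch of the conversion, not the actual `setMatEFit` internals):

```python
import numpy as np

# Complex matrix elements -> (magnitude, phase) fitting parameters, and back
matE = np.array([1.78461575 * np.exp(-0.86104140j),
                 2.68606212 * np.exp(2.61122920j)])
mags, phases = np.abs(matE), np.angle(matE)   # 2N real params from N complex values
recon = mags * np.exp(1j * phases)            # reconstruction used by the fit model
assert np.allclose(recon, matE)
```

`np.angle` returns phases in (-pi, pi], consistent with the default phase bounds listed above.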
7.4.4. Running fits#
7.4.4.1. Single fit#
With the parameters and data set, just call self.fit()!
Statistics and outputs are handled by lmfit, which includes uncertainty estimates and correlations in the fitted parameters.
# data.randomizeParams() # Randomize input parameters if desired
# For method testing using known initial params is also useful
data.fit()
# Check fit outputs - self.result shows results from the last fit
data.result
Fit Statistics
| fitting method | leastsq | |
| # function evals | 1320 | |
| # data points | 156 | |
| # variables | 7 | |
| chi-square | 0.01615077 | |
| reduced chi-square | 1.0839e-04 | |
| Akaike info crit. | -1417.40039 | |
| Bayesian info crit. | -1396.05140 |
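As a quick sanity check on the statistics reported by lmfit, the reduced chi-square is the chi-square divided by the degrees of freedom (data points minus varied parameters):

```python
# Reduced chi-square = chi-square / (N_data - N_varied), as reported by lmfit
chiSqr, nData, nVary = 0.01615077, 156, 7
redChi = chiSqr / (nData - nVary)
print(f"{redChi:.4e}")   # 1.0839e-04, matching the reported value
```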
Variables
| name | value | standard error | relative error | initial value | min | max | vary | expression |
|---|---|---|---|---|---|---|---|---|
| m_PU_SG_PU_1_n1_1_1 | 1.81863008 | 0.05116438 | (2.81%) | 1.7846157536101068 | 1.0000e-04 | 5.00000000 | True | |
| m_PU_SG_PU_1_1_n1_1 | 1.81863008 | 0.05116438 | (2.81%) | 1.7846157536101068 | 1.0000e-04 | 5.00000000 | False | m_PU_SG_PU_1_n1_1_1 |
| m_PU_SG_PU_3_n1_1_1 | 0.78365089 | 0.12317226 | (15.72%) | 0.802904951323892 | 1.0000e-04 | 5.00000000 | True | |
| m_PU_SG_PU_3_1_n1_1 | 0.78365089 | 0.12317225 | (15.72%) | 0.802904951323892 | 1.0000e-04 | 5.00000000 | False | m_PU_SG_PU_3_n1_1_1 |
| m_SU_SG_SU_1_0_0_1 | 2.67947346 | 0.06072913 | (2.27%) | 2.686062120382649 | 1.0000e-04 | 5.00000000 | True | |
| m_SU_SG_SU_3_0_0_1 | 1.15266253 | 0.14480693 | (12.56%) | 1.109153108617096 | 1.0000e-04 | 5.00000000 | True | |
| p_PU_SG_PU_1_n1_1_1 | -0.86104140 | 0.00000000 | (0.00%) | -0.8610414024232179 | -3.14159265 | 3.14159265 | False | |
| p_PU_SG_PU_1_1_n1_1 | -0.86104140 | 0.00000000 | (0.00%) | -0.8610414024232179 | -3.14159265 | 3.14159265 | False | p_PU_SG_PU_1_n1_1_1 |
| p_PU_SG_PU_3_n1_1_1 | -3.14159265 | 0.26152765 | (8.32%) | -3.1204444620772467 | -3.14159265 | 3.14159265 | True | |
| p_PU_SG_PU_3_1_n1_1 | -3.14159265 | 0.26152765 | (8.32%) | -3.1204444620772467 | -3.14159265 | 3.14159265 | False | p_PU_SG_PU_3_n1_1_1 |
| p_SU_SG_SU_1_0_0_1 | 2.66958866 | 0.13105172 | (4.91%) | 2.611229196458127 | -3.14159265 | 3.14159265 | True | |
| p_SU_SG_SU_3_0_0_1 | 0.16332776 | 0.09525204 | (58.32%) | -0.07867827542158025 | -3.14159265 | 3.14159265 | True |
Correlations (unreported correlations are < 0.100)
| m_PU_SG_PU_1_n1_1_1 | p_SU_SG_SU_1_0_0_1 | +0.9725 |
| m_PU_SG_PU_1_n1_1_1 | m_PU_SG_PU_3_n1_1_1 | -0.9647 |
| m_SU_SG_SU_1_0_0_1 | m_SU_SG_SU_3_0_0_1 | -0.9630 |
| m_PU_SG_PU_3_n1_1_1 | p_SU_SG_SU_1_0_0_1 | -0.9433 |
| m_SU_SG_SU_1_0_0_1 | p_SU_SG_SU_1_0_0_1 | +0.8318 |
| m_SU_SG_SU_3_0_0_1 | p_SU_SG_SU_1_0_0_1 | -0.8029 |
| p_PU_SG_PU_3_n1_1_1 | p_SU_SG_SU_3_0_0_1 | +0.7354 |
| m_PU_SG_PU_1_n1_1_1 | m_SU_SG_SU_1_0_0_1 | +0.6905 |
| m_PU_SG_PU_3_n1_1_1 | m_SU_SG_SU_1_0_0_1 | -0.6741 |
| m_PU_SG_PU_1_n1_1_1 | m_SU_SG_SU_3_0_0_1 | -0.6705 |
| m_PU_SG_PU_3_n1_1_1 | m_SU_SG_SU_3_0_0_1 | +0.5872 |
| m_PU_SG_PU_3_n1_1_1 | p_PU_SG_PU_3_n1_1_1 | -0.5076 |
| p_PU_SG_PU_3_n1_1_1 | p_SU_SG_SU_1_0_0_1 | +0.4787 |
| m_PU_SG_PU_1_n1_1_1 | p_PU_SG_PU_3_n1_1_1 | +0.4280 |
| m_PU_SG_PU_3_n1_1_1 | p_SU_SG_SU_3_0_0_1 | -0.4227 |
| m_SU_SG_SU_1_0_0_1 | p_PU_SG_PU_3_n1_1_1 | +0.3819 |
| m_PU_SG_PU_1_n1_1_1 | p_SU_SG_SU_3_0_0_1 | +0.3771 |
| p_SU_SG_SU_1_0_0_1 | p_SU_SG_SU_3_0_0_1 | +0.3208 |
| m_SU_SG_SU_3_0_0_1 | p_PU_SG_PU_3_n1_1_1 | -0.2783 |
# Plot results with data
# data.BLMfitPlot(backend='hv')
data.BLMfitPlot()
Dataset: subset, AFBLM
Dataset: 0, AFBLM
7.4.4.2. Extended execution methods, including parallel and batched execution#
(a) Serial execution
Either:
- Manually, with a loop.
- With the self.multiFit() method, although this is optimised for parallel execution (see below).
import time
start = time.time()
# Maual execution
for n in range(0,10):
data.randomizeParams()
data.fit()
end = time.time()
print((end - start)/60)
# Or run with self.multiFit(parallel = False)
# data.multiFit(nRange = [0,100], parallel = False)
0.7577979763348898
# We now have 11 fit results as integer keys (key 0 from the single fit above, 1-10 from the loop)
data.data.keys()
dict_keys(['subset', 'orb6', 'orb5', 'ADM', 'pol', 'sim', 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
(b) Parallel execution
Updated version including parallel fitting routine with self.multiFit() method.
This currently uses the XYZpy library for quick parallelization, although there is some additional setup overhead in the current implementation due to class init per fit batch. The default aims for ~90% CPU usage, based on core-count.
# Multifit wrapper with range of fits specified
# Set 'num_workers' to override the default.
data.multiFit(nRange = [0,10], num_workers=20)
Number of processors: 64
Running pool on: 20
* sparse not found, sparse matrix forms not available.
* natsort not found, some sorting functions not available.
* Setting plotter defaults with epsproc.basicPlotters.setPlotters(). Run directly to modify, or change options in local env.
* Set Holoviews with bokeh.
* pyevtk not found, VTK export not available.
0%|          | 0/10 [00:00<?, ?it/s]
OMP: Info #271: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
100%|#####################################################################################################| 10/10 [00:46<00:00, 4.62s/it]
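The general pattern (a pool of workers, each running a randomized fit) can be sketched with the standard library. This is illustrative only: PEMtk's multiFit uses XYZpy with process-based workers, while a thread pool and a toy objective (`fitOnce`, hypothetical) are used here to keep the sketch self-contained.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def fitOnce(seed):
    """Toy stand-in for one randomized fit: returns (seed, pseudo chi-square)."""
    rng = random.Random(seed)
    return seed, sum(rng.random() for _ in range(100))

# Pool pattern: map fit batches over workers, collect results by index
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fitOnce, range(10)))

assert len(results) == 10
```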
(c) Dump data
Various options are available. The most complete is to use pickle (the default case), although this is not recommended for archival use. For details see https://epsproc.readthedocs.io/en/dev/dataStructures/ePSproc_dataStructures_demo_070622.html
outStem = 'dataDump_N2' # Set for file save later
# data.writeFitData(fName='N2_datadump') # Use 'fName' to supply a filename
data.writeFitData(dataPath = dataPath, outStem=outStem) # Use 'outStem' to define a filename which will be appended with a timestamp
# Set dataPath if desired, otherwise will use working dir
Dumped self.data to /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/dataDump_N2_280423_09-40-05.pickle with pickle.
Dumped data to /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/dataDump_N2_280423_09-40-05.pickle with pickle.
PosixPath('/home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/dataDump_N2_280423_09-40-05.pickle')
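A minimal sketch of the pickle round-trip underlying the dump (illustrative only; writeFitData handles file naming and timestamps itself). Pickle preserves arbitrary Python objects, but is Python- and version-specific, hence the caveat about archival use.

```python
import pickle
import tempfile
from pathlib import Path

# Round-trip a nested results dict, as the pickle dump above does for self.data
results = {'subset': {'AFBLM': [0.1, 0.2]}, 0: {'redchi': 1.08e-4}}
fOut = Path(tempfile.mkdtemp()) / 'dataDump_demo.pickle'
with open(fOut, 'wb') as f:
    pickle.dump(results, f)
with open(fOut, 'rb') as f:
    reloaded = pickle.load(f)
assert reloaded == results
```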
7.5. Quick setup with script#
The steps demonstrated above are also wrapped in a helper script, although some steps may need to be re-run to change selection properties or ranges.
# Run general config script with dataPath set above
%run {dataPath/"setup_fit_demo.py"} -d {dataPath}
*** Setting up demo fitting workspace and main `data` class object...
For more details see https://pemtk.readthedocs.io/en/latest/fitting/PEMtk_fitting_basic_demo_030621-full.html
To use local source code, pass the parent path to this script at run time, e.g. "setup_fit_demo ~/github"
* Loading packages...
* Set Holoviews with bokeh.
* Loading demo matrix element data from /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/photoionization/n2_multiorb
*** No file(s) found matching fileIn=None, fileBase=/home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/photoionization/n2_multiorb.
*** No file(s) found matching fileIn=None, fileBase=/home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/photoionization/n2_multiorb.
*** WARNING: no data files found. Please check path to data file(s) is correct.
*** Job subset details
Key: subset
No 'job' info set for self.data[subset].
* Loading demo ADM data from /home/jovyan/jake-home/buildTmp/_latest_build/html/doc-source/part2/n2fitting/alignment/N2_ADM_VM_290816.mat...
*** ADM data file not found. Setup script will abort.
<Figure size 720x480 with 0 Axes>